Planning with State Uncertainty via Contingency Planning and Execution Monitoring
Authors
Abstract
This paper proposes a fast alternative to POMDP planning for domains with deterministic state-changing actions but probabilistic observation-making actions and partial observability of the initial state. This class of planning problems, which we call quasi-deterministic problems, includes important real-world domains such as planning for Mars rovers. Our approach is to reformulate the quasi-deterministic problem as a completely observable problem and build a contingency plan that branches wherever an observational action is used to determine the value of some state variable. The plan for the completely observable problem is constructed under the assumption that state variables can be determined exactly at execution time using these observational actions. Since this is often not the case, owing to imperfect sensing of the world, we then use execution monitoring to select additional actions at execution time that determine the value of the state variable sufficiently accurately. A value-of-information calculation decides which information-gathering actions to perform and when to stop gathering information and continue executing the branching plan. We show empirically that while the plans found are not optimal, they can be generated much faster and are of higher quality than those produced by other approaches.
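As a rough illustration of the value-of-information test described above, the Python snippet below is a minimal sketch, not the paper's implementation: at a single binary branch point it decides whether to pay for one more noisy observation or to commit to a branch of the contingency plan. The two-branch payoffs, the sensor accuracy, and the observation cost are all assumed for the example.

def commit_value(belief, payoff):
    """Expected value of committing now to the best branch.
    belief -- P(state variable is True)
    payoff -- payoff[branch][actual_value] for actual_value in {True, False}
    """
    return max(belief * payoff[b][True] + (1 - belief) * payoff[b][False]
               for b in payoff)

def posterior(belief, observed_true, accuracy):
    """Bayes update of P(variable is True) after one noisy observation."""
    p_obs_given_true = accuracy if observed_true else 1 - accuracy
    p_obs_given_false = 1 - accuracy if observed_true else accuracy
    p_obs = belief * p_obs_given_true + (1 - belief) * p_obs_given_false
    return belief * p_obs_given_true / p_obs

def value_of_observation(belief, payoff, accuracy):
    """Myopic VOI: expected commit value after one more observation,
    minus the value of committing to the best branch right now."""
    p_obs_true = belief * accuracy + (1 - belief) * (1 - accuracy)
    after = (p_obs_true * commit_value(posterior(belief, True, accuracy), payoff)
             + (1 - p_obs_true) * commit_value(posterior(belief, False, accuracy), payoff))
    return after - commit_value(belief, payoff)

if __name__ == "__main__":
    # Two plan branches, built assuming the variable is True or False (assumed payoffs).
    payoff = {
        "true_branch":  {True: 10.0, False: -5.0},
        "false_branch": {True: -2.0, False: 6.0},
    }
    belief, accuracy, obs_cost = 0.55, 0.8, 0.5

    # Simulated noisy readings a real executive would obtain from its sensors.
    for reading in [True, True, False, True]:
        if value_of_observation(belief, payoff, accuracy) <= obs_cost:
            break                      # further sensing is not worth its cost
        belief = posterior(belief, reading, accuracy)

    branch = "true_branch" if belief >= 0.5 else "false_branch"
    print(f"final belief {belief:.2f}, execute {branch}")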
Similar resources
Complete Contingency Planners
A framework is proposed for the investigation of planning systems that must deal with bounded uncertainty. A definition of this new class of contingency planners is given. A general, complete contingency planning algorithm is described. The algorithm is suitable for many incomplete-information games as well as planning situations where the initial state is only partially known. A rich domain is i...
Integrated Inspection Planning and Preventive Maintenance for a Markov Deteriorating System Under Scenario-based Demand Uncertainty
In this paper, a single-product, single-machine system under Markovian deterioration of machine condition and demand uncertainty is studied. The objective is to find the optimal intervals for inspection and preventive maintenance activities in a condition-based maintenance planning with discrete monitoring framework. At first, a stochastic dynamic programming model whose state variable is the ...
Execution Monitoring to Improve Plans with Information Gathering
There has been much recent interest in planning problems with deterministic actions but stochastic observations. Examples include Mars rover planning, robot monitoring tasks and the Rocksample domain from the planning competition. However, theoretical results show that in general these problems are as hard as solving partially observable Markov decision problems (POMDPs). We propose an approach...
Integrating Planning, Execution, and Learning to Improve Plan Execution
Algorithms for planning under uncertainty require accurate action models that explicitly capture the uncertainty of the environment. Unfortunately, obtaining these models is usually complex. In environments with uncertainty, actions may produce countless outcomes and hence, specifying them and their probability is a hard task. As a consequence, when implementing agents with planning capabilitie...
Linear Time Varying MPC Based Path Planning of an Autonomous Vehicle via Convex Optimization
In this paper, a new method is introduced for path planning of an autonomous vehicle. In this method, the environment is considered cluttered and subject to several sources of uncertainty. Thus, the state of a detected object should be estimated using an optimal filter. To do so, the state distribution is assumed Gaussian, and the state vector is estimated by a Kalman filter at each time step. The estimation...
Journal:
Volume, Issue:
Pages: -
Publication date: 2011